Optimal uncertainty-guided neural network training

Authors

Abstract

Neural network (NN)-based direct uncertainty quantification (UQ) methods have achieved state-of-the-art performance since the first of them, known as the lower–upper-bound estimation (LUBE) method, was introduced. However, currently available cost functions for uncertainty-guided NN training do not always converge, and not all converged NNs generate optimized prediction intervals (PIs). In recent years, researchers have proposed different quality criteria for PIs, which raises a question about their relative effectiveness. Most existing cost functions are not customizable, and their convergence is uncertain. Therefore, in this paper, we propose a highly customizable smooth cost function for developing NNs that construct optimal PIs. The method computes the average width of PIs, the PI-failure distances, and the PI coverage probability (PICP) on the test dataset. We examine wind power generation, electricity demand, and temperature forecast datasets. Results show that the proposed method reduces variation in PI quality, accelerates training, and improves the convergence probability from 99.2% to 99.8%.

• An optimal and smooth cost function for uncertainty-guided NN training is proposed.
• NN training becomes faster and the convergence probability becomes higher.
• The cost function is customizable.
• Training philosophies and techniques are presented.
• The method is evaluated on wind power, electricity demand, and temperature datasets.
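As a rough illustration of the quantities named in the abstract, the sketch below computes a smooth LUBE-style PI cost from a sigmoid-smoothed PICP and the normalized average PI width. It is a minimal sketch under our own assumptions: the names (pi_cost, mu, eta, s), the smoothed coverage indicator, and the penalty form are illustrative and are not the paper's actual cost function.

# A minimal sketch, assuming NumPy and SciPy, of a smooth
# prediction-interval (PI) cost in the spirit of LUBE-style training.
# All names and the penalty form are illustrative assumptions.
import numpy as np
from scipy.special import expit  # numerically stable sigmoid

def pi_cost(y, lower, upper, mu=0.95, eta=50.0, s=50.0):
    """Smooth PI cost: normalized average width plus a coverage penalty.

    y, lower, upper : targets and the PI bounds produced by the NN
    mu              : nominal coverage level, e.g. 0.95
    eta, s          : penalty weight and sigmoid sharpness
    """
    # PI normalized average width (PINAW)
    pinaw = (upper - lower).mean() / (y.max() - y.min())
    # Smooth coverage indicator: close to 1 when lower <= y <= upper
    covered = expit(s * (y - lower)) * expit(s * (upper - y))
    picp = covered.mean()  # smooth PI coverage probability (PICP)
    # Penalize coverage only when it falls below the nominal level mu
    return pinaw + eta * max(0.0, mu - picp) ** 2

# Toy check: wide intervals pay in width, narrow ones in coverage.
rng = np.random.default_rng(0)
y = rng.normal(size=200)
print(pi_cost(y, y - 2.0, y + 2.0))  # covered but wide
print(pi_cost(y, y - 0.1, y + 0.1))  # narrow, penalty dominates

Because the coverage term is sigmoid-smoothed rather than a hard count, the cost stays differentiable everywhere, which is the kind of property that lets gradient-based NN training converge reliably.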


Similar articles

Incremental Convolutional Neural Network Training

Experimenting with novel ideas on deep convolutional neural networks (DCNNs) with big datasets is hampered by the fact that network training requires huge computational resources in terms of CPU and GPU power and hours. One option is to downscale the problem, e.g., fewer classes and fewer samples, but this is undesirable with DCNNs, whose performance is largely data-dependent. In this work, we take...


Accelerating Recurrent Neural Network Training

An efficient algorithm for recurrent neural network training is presented. The approach increases training speed for tasks where the length of the input sequence may vary significantly. The proposed approach is based on optimal batch bucketing by input sequence length and on data parallelization across multiple graphics processing units. The baseline training performance without sequence bucket...
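The bucketing idea can be shown in a few lines. The following minimal Python sketch groups sequences into length buckets so that batches drawn from one bucket need little padding; the bucket width, helper names, and padding scheme are our illustrative assumptions, not the authors' implementation.

# Minimal sketch of sequence bucketing by length, assuming plain
# Python lists; names and the padding scheme are illustrative.
from collections import defaultdict

def bucket_by_length(sequences, bucket_width=10):
    """Group sequences whose lengths fall in the same bucket,
    so each batch needs little padding."""
    buckets = defaultdict(list)
    for seq in sequences:
        key = len(seq) // bucket_width  # lengths 0-9 share bucket 0, etc.
        buckets[key].append(seq)
    return buckets

def pad_batch(batch, pad_value=0):
    """Pad every sequence in a batch to the batch's maximum length."""
    max_len = max(len(s) for s in batch)
    return [s + [pad_value] * (max_len - len(s)) for s in batch]

seqs = [[1] * n for n in (3, 5, 12, 14, 27)]
for key, batch in sorted(bucket_by_length(seqs).items()):
    print(key, [len(s) for s in pad_batch(batch)])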


Improving Neural Network Classification Training

This work (Michael E. Rimer, Ph.D. dissertation, Department of Computer Science) presents a new set of general methods for improving neural network accuracy on classification tasks, grouped under the label of classification-based methods. The central theme of these approaches is to provide problem representations and error functions that m...


Variable projections neural network training

The training of some types of neural networks leads to separable non-linear least squares problems. These problems may be ill-conditioned and require special techniques. A robust algorithm based on the Variable Projections method of Golub and Pereyra is designed for a class of feed-forward neural networks and tested on benchmark examples and real data.
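To make the variable-projection idea concrete, here is a small SciPy sketch for a separable model y ≈ c1*exp(-t*x) + c2: the linear coefficients are eliminated by linear least squares inside the objective, and only the nonlinear parameter is optimized. The model, data, and names are our illustrative assumptions, not Golub and Pereyra's full algorithm.

# Sketch of variable projection for a separable least-squares model,
# assuming NumPy and SciPy; model and names are illustrative.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)
x = np.linspace(0.0, 4.0, 60)
y = 2.0 * np.exp(-1.3 * x) + 0.5 + 0.01 * rng.normal(size=x.size)

def projected_residual(t):
    """For a fixed nonlinear parameter t, solve the linear coefficients
    exactly by linear least squares and return the residual norm."""
    A = np.column_stack([np.exp(-t * x), np.ones_like(x)])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return np.linalg.norm(A @ coef - y)

# Only the nonlinear parameter is searched; the linear ones are
# projected out, which shrinks and conditions the outer problem.
res = minimize_scalar(projected_residual, bounds=(0.1, 5.0), method="bounded")
print("estimated t:", res.x)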


Training a Quantum Neural Network

Quantum learning holds great promise for the field of machine intelligence. The most studied quantum learning algorithm is the quantum neural network. Many such models have been proposed, yet none has become a standard. In addition, these models usually leave out many details, often excluding how they intend to train their networks. This paper discusses one approach to the problem and what adva...



Journal

Journal title: Applied Soft Computing

Year: 2021

ISSN: 1568-4946, 1872-9681

DOI: https://doi.org/10.1016/j.asoc.2020.106878